Gov. Glenn Youngkin has on his desk a bill that the Virginia legislature passed last month, the High-Risk Artificial Intelligence Developer and Deployer Act, which would require companies that create or operate "high-risk" AI systems used to make consequential decisions in employment, lending, healthcare, housing, insurance and other areas to implement safeguards against algorithmic discrimination.
Youngkin, a Republican, has until March 24 to take action on the bill. If enacted, the Virginia law would join Colorado's inaugural framework to clamp down on algorithmic discrimination in "high-risk" AI technologies, which was put in place last year despite several "reservations" voiced by that state's Democratic governor about its breadth and impact on innovation.
Attorneys predict that these measures will only be the beginning.
"We expect to see more of this legislative activity in states, not less of it, because of the very intentional way that the federal government is withdrawing from the playing field in terms of AI regulation," said Scott Kosnoff, a partner at Faegre Drinker Biddle & Reath LLP.
During his first week in office, President Donald Trump revoked a Biden-era executive order issued in 2023 that aimed to establish standards for developing safe, secure and trustworthy AI systems. In its place, Trump tasked certain government agencies and officials, including the White House's new AI and crypto czar David Sacks, with developing a new AI action plan and with moving to "suspend, revise, or rescind" actions taken pursuant to Biden's order that purportedly pose "barriers to American AI innovation."
Vice President J.D. Vance reiterated these sentiments in remarks delivered last month at the international AI Action Summit in Paris, where he expressed concerns that "excessive regulation of the AI sector could kill a transformative industry just as it's taking off."
These actions have signaled "a sea change within the executive branch with respect to how AI is being treated, and that's going to encourage some states to rush in and fill the perceived void with laws," according to Kosnoff.
"The federal government is saying it's deregulating, while some of the states are saying that they're regulating, which really sets up a tremendous challenge for developers and deployers alike by creating a situation where they'll likely be subject to different regulations depending on what states they're in, which businesses don't like to see, instead of having a uniform federal-based approach to AI," Kosnoff said.
While some states may have been waiting for the federal government to take the lead on AI, the policy shift in Washington makes it clear that the time is ripe for them to act, much as states have steadily done since California enacted the first comprehensive privacy law regulating companies' handling of personal consumer data in 2018.
"States can't hold out much longer," said Natasha Allen, a partner and co-chair of the AI sector at Foley & Lardner LLP. "The reality is AI isn't going anywhere, and states are realizing that waiting is not the answer. Developments are happening every day in this space, and states are recognizing that they have to put in some type of guardrails around this technology."
The push to regulate is likely to create a patchwork similar to the one that's emerged with respect to comprehensive consumer data privacy statutes now on the books in 19 states, which generally share the same core principles but contain important differences in their scope, how they define key terms and what measures they require companies to undertake.
"We are likely to see history repeat itself, and companies are going to be subject to yet another patchwork approach," said Goli Mahdavi, a partner and global AI lead at Bryan Cave Leighton Paisner LLP. "Where there is a patchwork of laws, organizations are generally best served by aligning their compliance and risk mitigation programs with the strictest standard. For now, in the U.S., that is the Colorado AI Act," which is set to take effect in February 2026.
Virginia, which was also the second state to enact a comprehensive data privacy law, is again at the forefront of this anticipated fragmentation. While the Virginia AI bill "is modeled closely on" and shares the same basic structure as its predecessor in Colorado, "there are notable differences in their scope, definitions and specific requirements," noted Mahdavi.
The Virginia proposal, which would take effect July 1, 2026, if enacted, takes a narrower approach, with a more limited definition of high-risk systems, clearer transparency obligations and a less stringent "principal basis" test for determining which types of automated decision-making activities count as covered high-risk uses.
As one prominent example, while job applicants appear to be covered by the Virginia bill, the measure departs from the Colorado law by specifically excluding individuals acting in a commercial or employment context from the definition of a consumer, Mahdavi noted.
"So, effectively, both the Colorado AI Act and the Virginia bill apply to AI-driven hiring or screening tools," Mahdavi said. "But, if an employer uses a high-risk AI system for ongoing employee monitoring, as is often the case particularly with remote workforces, the Virginia AI bill would not apply in that context."
The Virginia AI bill also notably includes monitoring, accountability and transparency requirements for both developers and deployers of high-risk AI systems used to make consequential decisions. Specifically, the measure requires developers to exercise "reasonable care" to protect consumers from "known or foreseeable risks" of algorithmic discrimination by taking steps such as providing deployers with information about a system's intended uses, limitations and risks. It also obligates deployers to implement a risk management program, complete impact assessments, provide consumers with clear disclosures about AI systems' use, and offer them explanation, correction and appeal rights for adverse decisions.
While the Colorado law takes a similar approach, the Virginia bill puts additional pressure on both creators and end users to weed out algorithmic discrimination, while deepening the ongoing debate among policymakers over how responsibility for the output of AI systems should be divided, according to Allen, the Foley & Lardner partner.
"There's this push and pull over whether it should be incumbent on developers to train and develop secure AI systems or is it something that should be looked at on the back end, since the output may not be what the developer intended," Allen said. "It will be interesting to see the extent to which other legislatures go with this."
These considerations and nuances are likely to mean that, if the Virginia bill is signed into law, it "won't be an easy task to comply with it," said Kosnoff, of Faegre Drinker.
"It's not going to be something that companies can knock out over a three-day weekend," he said. "There are plenty of teeth to the Virginia bill, and they'll have to put a lot of thought and effort into complying with the act."
While states like Connecticut, which fell short of enacting its own AI bias bill last year, are considering proposals that are similar to what Colorado and Virginia have done, other legislatures may elect to take a different approach to regulating this emerging technology, which could further complicate compliance efforts, attorneys said.
"The lens will eventually broaden beyond these algorithmic discrimination bills based on the same collective chassis, given that consumers can be hurt by other issues such as inaccuracies in these models and data privacy concerns, and then all bets are off," Kosnoff said.
The most likely candidate to take this route is California, whose first-in-the-nation consumer data privacy law is also based on a different framework than the laws that came after it.
The Golden State has already enacted several laws to address certain risks posed by the developing technology, including the spread of misinformation and a lack of transparency. This year, the legislature is poised to consider not only an AI bias bill, but also additional groundbreaking AI measures, including proposals aimed at protecting children from safety and privacy risks associated with AI and a revised version of an AI safety bill that was vetoed by the governor last year.
A veto of the new AI bias bill by Virginia's governor could also throw a wrench into these regulatory efforts more broadly, attorneys added.
The measure has faced opposition from groups such as the Center for Democracy and Technology, the American Association of People with Disabilities and the Disability Rights Education & Defense Fund, which submitted public testimony to the state Senate last month calling for the bill to be rejected.
They argued that, while the proposal's "transparency provisions and guardrails appear solid on their merits," and they applauded the measure placing a duty of care on developers to take steps to prevent algorithmic discrimination, the bill as currently drafted "would not accomplish the goals it purports to advance because of numerous loopholes and exemptions that would make it far too easy for companies to opt themselves out of complying with the law."
"Unless those issues are addressed, the bill would suffer the same fate as [New York City's] AI-in-hiring ordinance, which companies have almost entirely ignored," the groups contended.
If the governor ends up sending the legislature back to the drawing board, that would likely affect not only how AI is regulated in Virginia but also the overarching approach that states across the nation take to balancing consumer protection and innovation in the AI realm.
"A veto likely wouldn't dissuade other states," Allen, of Foley & Lardner, said. "What it could do, however, is cause them to look at what the Virginia legislature did that didn't resonate and allow those states to pivot if they're trying to go down the same route."
--Editing by Kelly Duncan.